
    Scale-aware Super-resolution Network with Dual Affinity Learning for Lesion Segmentation from Medical Images

    Convolutional Neural Networks (CNNs) have shown remarkable progress in medical image segmentation. However, lesion segmentation remains a challenge for state-of-the-art CNN-based algorithms due to the variance in lesion scales and shapes. On the one hand, tiny lesions are hard to delineate precisely from medical images, which are often of low resolution. On the other hand, segmenting large lesions requires large receptive fields, which exacerbates the first challenge. In this paper, we present a scale-aware super-resolution network to adaptively segment lesions of various sizes from low-resolution medical images. Our proposed network contains dual branches that simultaneously conduct lesion mask super-resolution and lesion image super-resolution. The image super-resolution branch provides more detailed features to the segmentation branch, i.e., the mask super-resolution branch, for fine-grained segmentation. Meanwhile, we introduce scale-aware dilated convolution blocks into the multi-task decoders to adaptively adjust the receptive fields of the convolutional kernels according to the lesion sizes. To guide the segmentation branch to learn from richer high-resolution features, we propose a feature affinity module and a scale affinity module to enhance the multi-task learning of the dual branches. On multiple challenging lesion segmentation datasets, our proposed network achieved consistent improvements compared to other state-of-the-art methods.
    Comment: Journal paper under review. 10 pages. The first two authors contributed equally.
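The scale-aware dilated convolution idea above can be illustrated with a minimal sketch: dilation enlarges a kernel's receptive field without adding parameters, so a dilation factor can be chosen to match the lesion size. The 1-D convolution and the dilation-selection rule below are illustrative assumptions, not the paper's exact design.

```python
def dilated_conv1d(signal, kernel, dilation):
    """Valid-mode 1-D convolution with a dilation factor.

    The effective receptive field grows to
    (len(kernel) - 1) * dilation + 1 samples, so a larger dilation
    covers a larger lesion with the same number of weights.
    """
    span = (len(kernel) - 1) * dilation + 1
    return [sum(kernel[k] * signal[start + k * dilation]
                for k in range(len(kernel)))
            for start in range(len(signal) - span + 1)]


def pick_dilation(lesion_size, kernel_size=3, candidates=(1, 2, 4, 8)):
    """Hypothetical scale-aware rule: pick the dilation whose
    receptive field best matches an estimated lesion size."""
    field = lambda d: (kernel_size - 1) * d + 1
    return min(candidates, key=lambda d: abs(field(d) - lesion_size))
```

For example, a 3-tap kernel with dilation 4 spans 9 samples, so `pick_dilation(9)` selects dilation 4.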

    Deep Omni-supervised Learning for Rib Fracture Detection from Chest Radiology Images

    Deep learning (DL)-based rib fracture detection has shown promise of playing an important role in preventing mortality and improving patient outcomes. Normally, developing DL-based object detection models requires a huge amount of bounding box annotations. However, annotating medical data is time-consuming and expertise-demanding, making it extremely infeasible to obtain a large amount of fine-grained annotations. This poses a pressing need to develop label-efficient detection models that alleviate radiologists' labeling burden. To tackle this challenge, the object detection literature has witnessed an increase in weakly-supervised and semi-supervised approaches, yet it still lacks a unified framework that leverages various forms of fully-labeled, weakly-labeled, and unlabeled data. In this paper, we present a novel omni-supervised object detection network, ORF-Netv2, to leverage as much available supervision as possible. Specifically, a multi-branch omni-supervised detection head is introduced, with each branch trained on a specific type of supervision. A co-training-based dynamic label assignment strategy is then proposed to enable flexible and robust learning from the weakly-labeled and unlabeled data. Extensive evaluation of the proposed framework was conducted on three rib fracture datasets covering both chest CT and X-ray. By leveraging all forms of supervision, ORF-Netv2 achieves mAPs of 34.7, 44.7, and 19.4 on the three datasets, respectively, surpassing the baseline detector, which uses only box annotations, by mAP gains of 3.8, 4.8, and 5.0, respectively. Furthermore, ORF-Netv2 consistently outperforms other competitive label-efficient methods over various scenarios, showing a promising framework for label-efficient fracture detection.
    Comment: 11 pages, 4 figures, and 7 tables.
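The multi-branch omni-supervised head described above pairs each supervision type with its own branch. A minimal sketch of the routing step is below; the branch names and the dictionary-based sample format are assumptions for illustration, not ORF-Netv2's actual interface.

```python
# One classification branch per supervision type (names assumed).
BRANCHES = ("fully_labeled", "weakly_labeled", "unlabeled")


def route_batch(samples):
    """Group training samples by supervision type so each branch of
    the detection head only sees the kind of label it was built for."""
    buckets = {b: [] for b in BRANCHES}
    for s in samples:
        kind = s.get("supervision", "unlabeled")
        if kind not in buckets:
            raise ValueError(f"unknown supervision type: {kind}")
        buckets[kind].append(s)
    return buckets
```

Each branch's loss would then be computed only on its own bucket, which is what lets a single detector consume heterogeneous annotations.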

    Fast ScanNet: fast and dense analysis of multi-gigapixel whole-slide images for cancer metastasis detection

    Lymph node metastasis is one of the most important indicators in breast cancer diagnosis and is traditionally assessed under the microscope by pathologists. In recent years, with the dramatic advances in high-throughput scanning and deep learning technology, automatic analysis of histology from whole-slide images has received a wealth of interest in the field of medical image computing, as it aims to alleviate pathologists' workload and simultaneously reduce the misdiagnosis rate. However, automatic detection of lymph node metastases from whole-slide images remains a key challenge because such images are typically very large, often multiple gigabytes in size. Also, the presence of hard mimics may result in a large number of false positives. In this paper, we propose a novel method with anchor layers for model conversion, which not only leverages the efficiency of fully convolutional architectures to meet the speed requirements of clinical practice, but also densely scans the whole-slide image to achieve accurate predictions on both micro- and macro-metastases. Incorporating the strategies of asynchronous sample prefetching and hard negative mining, the network can be effectively trained. The efficacy of our method is corroborated on the benchmark dataset of the 2016 Camelyon Grand Challenge. Our method achieved significant improvements over the state-of-the-art methods in tumour localization accuracy at a much faster speed, and even surpassed human performance on both challenge tasks.
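Hard negative mining, one of the training strategies named above, can be sketched in a few lines: the negatives the model scores most confidently (the "hard mimics" that cause false positives) are fed back into training. The scored-pair format below is an assumed simplification, not Fast ScanNet's actual pipeline.

```python
def hard_negative_mining(scored_samples, k):
    """Select the k negative samples with the highest predicted tumour
    scores, i.e. the false positives the model is most confident about.

    scored_samples: list of (score, label) pairs, label 0 = negative.
    These hard negatives are re-added to the training set so the model
    learns to suppress tissue that mimics metastases.
    """
    negatives = [(score, label) for score, label in scored_samples
                 if label == 0]
    negatives.sort(key=lambda pair: pair[0], reverse=True)
    return negatives[:k]
```

Easy negatives (low scores) contribute little gradient, so training on the top-scoring mistakes is what drives down the false-positive rate.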

    The Shared and Distinct White Matter Networks Between Drug-Naive Patients With Obsessive-Compulsive Disorder and Schizophrenia

    Background: Obsessive-compulsive disorder (OCD) and schizophrenia (SZ), two severe mental disorders, share many clinical symptoms and are tightly associated at the psychopathological level. However, the neurobiological substrates of these two diseases remain unclear. To the best of our knowledge, no study has directly compared OCD with SZ from the perspective of white matter (WM) networks.
    Methods: Graph theory and network-based statistic methods were applied to diffusion MRI to investigate and compare WM topological characteristics among 29 drug-naive patients with OCD, 29 drug-naive patients with SZ, and 65 demographically-matched healthy controls (NC).
    Results: Compared to NCs, patients with OCD showed alterations of nodal efficiency and strength in the orbitofrontal gyrus (OFG) and middle frontal gyrus (MFG), while patients with SZ exhibited widely-distributed abnormalities involving the OFG, MFG, fusiform gyrus, Heschl's gyrus, calcarine cortex, lingual gyrus, putamen, and thalamus; most of these regions also showed significant differences from the OCD group. Moreover, patients with SZ had significantly fewer connections in the striatum and visual/auditory cortices than those with OCD. The right putamen consistently showed significant differences between the two disorders in both nodal characteristics and structural connectivity.
    Conclusions: SZ and OCD present different levels of anatomical impairment and some distinct topological patterns, with the former showing more serious and more widespread disruptions. Significant differences between the two disorders are observed in many regions spanning the frontal, temporal, occipital, and subcortical areas. In particular, the putamen may serve as a potential imaging marker to distinguish these two disorders and may be a key difference in their pathological changes.
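Nodal efficiency, one of the graph-theoretic measures compared above, is the mean inverse shortest-path length from a node to every other node. A minimal sketch for an unweighted graph (a simplification of the weighted WM networks the study likely uses) is:

```python
from collections import deque


def nodal_efficiency(adj, node):
    """Nodal efficiency of `node` in an unweighted graph: the mean of
    1/d(node, j) over all other nodes j, where d is the shortest-path
    length (unreachable nodes contribute 0).

    adj: dict mapping each node to a list of its neighbours.
    """
    # Breadth-first search for shortest-path lengths from `node`.
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    others = [n for n in adj if n != node]
    return sum(1.0 / dist[n] for n in others if n in dist) / len(others)
```

A node with many short paths to the rest of the network scores close to 1; a disconnected node scores 0, which is why regional disruptions show up as reduced nodal efficiency.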

    From detection of individual metastases to classification of lymph node status at the patient level: the CAMELYON17 challenge

    Automated detection of cancer metastases in lymph nodes has the potential to improve the assessment of prognosis for patients. To enable fair comparison between algorithms for this purpose, we set up the CAMELYON17 challenge in conjunction with the IEEE International Symposium on Biomedical Imaging 2017 Conference in Melbourne. Over 300 participants registered on the challenge website, of which 23 teams submitted a total of 37 algorithms before the initial deadline. Participants were provided with 899 whole-slide images (WSIs) for developing their algorithms. The developed algorithms were evaluated on a test set encompassing 100 patients and 500 WSIs. The evaluation metric used was a quadratic weighted Cohen's kappa. We discuss the algorithmic details of the 10 best pre-conference and two post-conference submissions. All of these participants used convolutional neural networks in combination with pre- and postprocessing steps. Algorithms differed mostly in neural network architecture, training strategy, and pre- and postprocessing methodology. Overall, the kappa metric ranged from -0.13 to 0.89 across all submissions. The best results were obtained with pre-trained architectures such as ResNet. Confusion matrix analysis revealed that all participants struggled to reliably identify isolated tumor cells, the smallest type of metastasis, with detection rates below 40%. Qualitative inspection of the top participants' results revealed categories of false positives, such as nerves or contamination, which could be targeted for further optimization. Lastly, we show that simple combinations of the top algorithms yield higher kappa metric values than any algorithm individually, reaching 0.93 for the best combination.
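The quadratic weighted Cohen's kappa used as the challenge metric penalizes disagreements by the squared distance between ordinal classes, so confusing adjacent pN stages costs less than confusing distant ones. A self-contained sketch of the standard formula (integer class labels assumed) is:

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic weights for ordinal labels.

    kappa = 1 - sum(w * O) / sum(w * E), where O is the observed
    confusion matrix, E the expected matrix under chance agreement,
    and w[i][j] = (i - j)^2 / (n_classes - 1)^2.
    """
    # Observed confusion matrix.
    O = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        O[t][p] += 1
    n = len(y_true)
    hist_true = [sum(row) for row in O]
    hist_pred = [sum(O[i][j] for i in range(n_classes))
                 for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2
            num += w * O[i][j]
            den += w * hist_true[i] * hist_pred[j] / n  # expected count
    return 1.0 - num / den
```

Perfect agreement gives kappa 1, chance-level agreement gives 0, and systematic disagreement can go negative, which is how a submission can score -0.13.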

    ORF-Net: Deep Omni-supervised Rib Fracture Detection from Chest CT Scans

    Most existing object detection works are based on bounding box annotations: each object has a precisely annotated box. However, for rib fractures, bounding box annotation is very labor-intensive and time-consuming because radiologists need to investigate and annotate the fractures on a slice-by-slice basis. Although a few studies have proposed weakly-supervised or semi-supervised methods, they cannot handle different forms of supervision simultaneously. In this paper, we propose a novel omni-supervised object detection network, which can exploit multiple different forms of annotated data to further improve detection performance. Specifically, the proposed network contains an omni-supervised detection head in which each form of annotated data corresponds to a unique classification branch. Furthermore, we propose a dynamic label assignment strategy for the different annotated forms of data to facilitate better learning in each branch. Moreover, we also design a confidence-aware classification loss to emphasize samples with high confidence and further improve the model's performance. Extensive experiments on the testing dataset show that our proposed method consistently outperforms other state-of-the-art approaches, demonstrating the efficacy of deep omni-supervised learning in improving rib fracture detection performance.
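One way to realize a loss that "emphasizes samples with high confidence", as described above, is to scale each sample's cross-entropy by its predicted confidence raised to a power, the mirror image of focal loss's down-weighting of easy samples. The exact form in the paper is not given here, so the function below is an assumed sketch, not ORF-Net's actual loss.

```python
import math


def confidence_aware_ce(prob_correct, gamma=2.0):
    """Illustrative confidence-aware loss: weight the per-sample
    cross-entropy -log(p) by p**gamma, so confident samples dominate
    the gradient while uncertain (possibly noisy) ones are damped.
    This is an assumed formulation, not the paper's exact loss.
    """
    return -(prob_correct ** gamma) * math.log(max(prob_correct, 1e-12))
```

Relative to plain cross-entropy, a sample predicted at p = 0.9 keeps most of its loss while one at p = 0.3 is heavily attenuated, which is useful when pseudo-labels on weakly-labeled or unlabeled data are unreliable.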